42 research outputs found

    Affective state classification through CMAC-based model of affects (CCMA) using SVM

    A number of computational models have been proposed to perform emotion profiling through affective state classification using EEG signals. However, such models do not capture both the temporal and spatial dynamics of the signals. It is also observed that the existing models produce high classification accuracy on one subject but not on different subjects. Thus, in this paper a CMAC-based Computational Model of Affects (CCMA) is proposed as a feature extraction method for the classification task. CCMA preserves the temporal and spatial dynamics of EEG signals to produce better classification performance. Using a Support Vector Machine (SVM) as the classifier, the features produce higher classification accuracy for heterogeneous test
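
    A minimal sketch of the classification stage described above, assuming the CCMA features have already been extracted into a feature matrix; the CCMA extraction itself is not reproduced here, and the feature matrix, labels and SVM parameters below are illustrative assumptions rather than the authors' implementation.

    # Illustrative only: SVM classification of pre-extracted EEG emotion features.
    # `features` and `labels` are random placeholders standing in for the CCMA
    # feature vectors and emotion labels described in the abstract.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    features = rng.normal(size=(120, 64))          # placeholder feature matrix (trials x features)
    labels = rng.integers(0, 4, size=120)          # placeholder emotion labels

    X_train, X_test, y_train, y_test = train_test_split(
        features, labels, test_size=0.3, stratify=labels, random_state=0)

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
    clf.fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))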

    Extracting features using computational cerebellar model for emotion classification

    Several feature extraction techniques have been employed to extract features from EEG signals for classifying emotions. Such techniques are not constructed based on an understanding of EEG and brain functions, nor are they inspired by an understanding of emotional dynamics. Hence, the features are difficult to interpret and yield low classification performance. In this study, a new feature extraction technique using the Cerebellar Model Articulation Controller (CMAC) is proposed. The features are extracted from the weights of a data-driven self-organizing feature map that are adjusted during training to minimize the error between the desired output and the calculated output. A Multi-Layer Perceptron (MLP) classifier is then employed to classify fear, happiness, sadness and calm emotions. Experimental results show that the average accuracy of classifying emotions from EEG signals captured from 12 children aged between 4 and 6 years old ranges from 84.18% to 89.29%. In addition, classification performance for features derived from other techniques such as Power Spectrum Density (PSD), Kernel Density Estimation (KDE) and Mel-Frequency Cepstral Coefficients (MFCC) is also presented as a standard benchmark for comparison purposes. It is observed that the proposed approach yields an accuracy improvement of 33.77% to 55% over the respective comparison features. The experimental results indicate that the proposed approach has potential for comparable emotion recognition accuracy when coupled with MLP
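
    To make the CMAC idea concrete, the sketch below trains a deliberately simplified one-dimensional CMAC and then reads out its trained weight vector as a feature vector; this is only a generic CMAC illustration under assumed tiling sizes and learning rate, not the paper's model or data.

    # Illustrative only: a minimal 1-D CMAC whose trained weights could serve as a
    # feature vector, in the spirit of the approach above; all parameters are assumptions.
    import numpy as np

    class SimpleCMAC:
        def __init__(self, n_tilings=8, n_cells=32, lo=-1.0, hi=1.0, lr=0.1):
            self.n_tilings, self.n_cells, self.lo, self.hi, self.lr = n_tilings, n_cells, lo, hi, lr
            self.w = np.zeros((n_tilings, n_cells))       # association weights

        def _active_cells(self, x):
            # Each tiling covers the input range with a tiling-specific offset,
            # so one cell per tiling is activated for any scalar input x.
            span = (self.hi - self.lo) / (self.n_cells - 1)
            offsets = np.arange(self.n_tilings) * span / self.n_tilings
            return np.clip(((x - self.lo + offsets) / span).astype(int), 0, self.n_cells - 1)

        def predict(self, x):
            return self.w[np.arange(self.n_tilings), self._active_cells(x)].sum()

        def train(self, x, target):
            # LMS update: distribute the output error over the active cells.
            err = target - self.predict(x)
            self.w[np.arange(self.n_tilings), self._active_cells(x)] += self.lr * err / self.n_tilings

    cmac = SimpleCMAC()
    signal = np.sin(np.linspace(0, 2 * np.pi, 200))        # stand-in for an EEG segment
    for x, target in zip(signal[:-1], signal[1:]):         # learn one-step-ahead prediction
        cmac.train(x, target)
    feature_vector = cmac.w.ravel()                        # trained weights used as features
    print(feature_vector.shape)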

    Investment decisions based on EEG emotion recognition

    In recent years, computational neuroscience, the study of brain functions, has frequently been applied to discover interesting patterns in investment decisions. In neurofinance studies, emotions have been measured through sentiment analysis but not through biosignals. Behavioural finance affects investors' performance, which is also influenced by emotional or cognitive errors made when taking investment decisions. This paper focuses on EEG-based emotion recognition recorded while making decisions, which can also be helpful for investment returns. The features were extracted using the Mel-Frequency Cepstral Coefficient (MFCC) and classification used the Multi-Layer Perceptron (MLP) classifier. The EEG-based emotion recognition was tested using the dimensional models of emotions, 12-PAC and rSASM, and also the Radboud Faces Database (RaFD). Results show that investment decisions can be driven by the emotions of the investor and some measures should be taken before they lose their money
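
    A minimal sketch of the MFCC-plus-MLP pipeline named above, assuming pre-segmented single-channel trials; the sampling rate, trial data and labels are synthetic placeholders, not the study's recordings, and applying librosa's MFCC routine to EEG here is only one possible realisation.

    # Illustrative only: MFCC features from placeholder biosignal trials, classified with an MLP.
    import numpy as np
    import librosa
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    sr = 256                                       # assumed EEG sampling rate (Hz)
    rng = np.random.default_rng(0)
    trials = rng.normal(size=(60, sr * 4))         # 60 placeholder 4-second trials
    labels = rng.integers(0, 2, size=60)           # placeholder decision/emotion labels

    def mfcc_features(x, sr):
        # Short-window MFCCs averaged over time to give one feature vector per trial.
        m = librosa.feature.mfcc(y=x.astype(np.float32), sr=sr, n_mfcc=13, n_fft=256, n_mels=26)
        return m.mean(axis=1)

    X = np.vstack([mfcc_features(t, sr) for t in trials])
    X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.3, random_state=0)

    mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
    mlp.fit(X_train, y_train)
    print("held-out accuracy:", mlp.score(X_test, y_test))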

    Sentiment analysis for Malay language: systematic literature review

    Recent research and developments in Sentiment Analysis (SA) have simplified sentiment detection and classification from textual content. The related domains for these studies are diverse and comprise fields such as tourism, customer reviews, finance, software engineering, speech conversation, social media content, news and so on. SA research and development has been conducted for various languages such as Chinese and English. However, SA research on other languages such as Malay is still scarce. Thus, there is a need for SA research specifically for the Malay language. To understand trends and to support practitioners and researchers with comprehensive information regarding SA for the Malay language, this study reviews published articles on SA for the Malay language. From five online databases, namely ACM, Emerald Insight, IEEE Xplore, Science Direct and Scopus, 2433 scientific articles were obtained. Through the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) Statement, 10 articles were chosen for the review process. Those articles were reviewed according to a few categories consisting of the aim of the study, the SA classification techniques, and the domain and source of content. As a result, the conducted systematic literature review sheds some light on the starting point for research on SA for the Malay language

    Malay Online Virtual Integrated Corpus (MOVIC): a systematic review

    The development of various Malay corpora has given many researchers the opportunity to explore their usage in diverse contexts. However, the corpora are distributed across various locations, and for ease of access a system called Malay Online Virtual Integrated Corpus (MOVIC) is proposed. This paper focuses on applying a systematic literature review (SLR) to Malay corpus research to find out the recent developments in the area. From the initial search, 3231 articles were extracted from five online databases: IEEE Xplore, Scopus, ProQuest, Springer Link and ACM. After several rounds of filtering, 11 papers were selected for review

    Emotion recognition using electroencephalogram signal

    Emotions play an essential role in human life and are not consciously controlled. Some emotions can be easily expressed through facial expressions, speech, behavior and gesture, but some cannot. This study investigates emotion recognition using the electroencephalogram (EEG) signal. EEG signals can detect human brain activity accurately when captured with a high-resolution data acquisition device, as compared to other biological signals. Changes in the human brain's electrical activity occur very quickly, thus a high-resolution device is required to determine the emotion precisely. In this study, we demonstrate the strength and reliability of EEG signals as an emotion recognition mechanism for four different emotions: happy, sad, fear and calm. Data from six different subjects were collected using a BrainMarker EXG device which consists of 19 channels. The pre-processing stage was performed using a second-order low-pass Butterworth filter to remove unwanted signals. Then, two frequency bands, alpha and beta, were extracted from the signals. Finally, these samples were classified using an MLP neural network. Classification accuracy of up to 91% is achieved, and the average accuracies for calm, fear, happy and sad are 83.5%, 87.3%, 85.83% and 87.6% respectively. Thus, as a proof of concept, this study proposes a system for recognizing four states of emotion, namely happy, sad, fear and calm, using the EEG signal
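
    A compact sketch of the processing chain described above (second-order Butterworth filtering, alpha/beta band extraction, MLP classification), using synthetic single-channel epochs; the sampling rate, cut-off frequency, band-power features and labels are assumptions for illustration, not the study's data or exact settings.

    # Illustrative only: Butterworth pre-processing, alpha/beta band power, MLP classification.
    import numpy as np
    from scipy.signal import butter, filtfilt, welch
    from sklearn.neural_network import MLPClassifier

    fs = 256                                        # assumed sampling rate (Hz)
    rng = np.random.default_rng(0)
    trials = rng.normal(size=(80, fs * 5))          # 80 placeholder 5-second epochs
    labels = rng.integers(0, 4, size=80)            # placeholder labels: happy/sad/fear/calm

    b, a = butter(2, 40 / (fs / 2), btype="low")    # 2nd-order low-pass to remove unwanted signals

    def band_power(x, fs, lo, hi):
        f, pxx = welch(x, fs=fs, nperseg=fs)
        return pxx[(f >= lo) & (f <= hi)].mean()

    features = []
    for x in trials:
        xf = filtfilt(b, a, x)
        alpha = band_power(xf, fs, 8, 13)           # alpha band (8-13 Hz)
        beta = band_power(xf, fs, 13, 30)           # beta band (13-30 Hz)
        features.append([alpha, beta])
    features = np.asarray(features)

    mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
    mlp.fit(features, labels)                       # in practice, evaluate on held-out subjects
    print("training accuracy:", mlp.score(features, labels))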

    A Real-Time Brain-Computer Interface (BCI) framework for sleep state stimulation using a deep-learning technique: proposal

    Sleep disturbance can cause mental illnesses such as depression, hypertension, metabolic syndrome, and cognitive impairment. To date, various methods have been proposed as intervention measures for sleep disturbance, including taking a short mid-day nap. Falling asleep depends on several external factors, such as the ambience, temperature, sound, and lighting. On top of that, the factors that affect the quality and duration of falling asleep can be subjective. Attempting to provide feedback based on the configuration of those external factors is time-consuming. Additionally, if those external factors are incorrectly configured, the intended short nap as a solution may have the opposite effect. As such, research on real-time sleep analysis plays an important role. However, studies on deep-learning techniques for sleep analysis that can give real-time results are still scarce compared to offline sleep analysis. Therefore, this study aims to design and develop a real-time BCI framework for sleep state stimulation

    EEG based biometric identification using correlation and MLPNN models

    This study investigates the capability of electroencephalogram (EEG) signals to be used for biometric identification. In the context of biometrics, researchers have recently been focusing more on biomedical signals as substitutes for the biometric modalities in use today, as the signals obtained from our bodies are considered more secure and privacy-compliant. The EEG signals of 6 subjects were collected, where the subjects were required to undergo two baseline experiments: eyes open (EO) and eyes closed (EC). The signals were processed using a 2nd-order Butterworth filter to eliminate unwanted noise. Then, the Daubechies (db8) wavelet was applied to the signals in the feature extraction stage and, from there, the Power Spectral Density (PSD) of the alpha and beta waves was computed. Finally, a correlation model and a Multilayer Perceptron Neural Network (MLPNN) were applied to classify the EEG signals of each subject. The correlation model yielded a highly significant difference in coefficients between autocorrelation and cross-correlation, giving a coefficient value of 1 for autocorrelation and a value of less than 0.35 for cross-correlation. On the other hand, the MLPNN model gives an accuracy of 75.8% and 71.5% for classification during the EO and EC baseline conditions respectively. Therefore, these results support the usability of EEG signals in biometric recognition
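
    A small sketch of the feature-extraction and correlation steps named above (Butterworth filtering, db8 wavelet decomposition, Welch PSD of alpha/beta bands, correlation between feature vectors); the synthetic recordings, filter band and feature layout are assumptions, not the study's data or exact configuration.

    # Illustrative only: db8 wavelet + alpha/beta PSD features and a correlation check.
    import numpy as np
    import pywt
    from scipy.signal import butter, filtfilt, welch

    fs = 256                                        # assumed sampling rate (Hz)
    rng = np.random.default_rng(0)
    subject_a = rng.normal(size=fs * 10)            # placeholder 10-second recordings
    subject_b = rng.normal(size=fs * 10)

    b, a = butter(2, [1 / (fs / 2), 40 / (fs / 2)], btype="band")   # 2nd-order Butterworth

    def features(x):
        xf = filtfilt(b, a, x)
        coeffs = pywt.wavedec(xf, "db8", level=5)   # Daubechies-8 multilevel decomposition
        f, pxx = welch(xf, fs=fs, nperseg=fs)
        alpha = pxx[(f >= 8) & (f <= 13)].mean()    # alpha-band PSD
        beta = pxx[(f >= 13) & (f <= 30)].mean()    # beta-band PSD
        return np.concatenate([np.concatenate(coeffs)[:64], [alpha, beta]])

    fa, fb = features(subject_a), features(subject_b)
    auto = np.corrcoef(fa, fa)[0, 1]                # same subject: coefficient of 1
    cross = np.corrcoef(fa, fb)[0, 1]               # different subjects: lower coefficient
    print(auto, cross)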

    Application of artificial intelligence techniques for brain-computer interface in mental fatigue detection: a systematic review (2011-2022)

    Mental fatigue is a psychophysical condition with a significant adverse effect on daily life, compromising both physical and mental wellness. We are experiencing challenges in this fast-changing environment, and mental fatigue problems are becoming more prominent. This creates an urgent need to explore an effective and accurate automated system for timely mental fatigue detection. Therefore, we present a systematic review of brain-computer interface (BCI) studies for mental fatigue detection using artificial intelligence (AI) techniques published in Scopus, IEEE Xplore, PubMed and Web of Science (WOS) between 2011 and 2022. The Boolean search expression (((ELECTROENCEPHALOGRAM) AND (BCI)) AND (FATIGUE CLASSIFICATION)) AND (BRAIN-COMPUTER INTERFACE) was used to select the articles. Through the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology, we selected 39 out of 562 articles. Our review identified a research gap in employing BCI for mental fatigue intervention through automated neurofeedback. The AI techniques employed to develop EEG-based mental fatigue detection are discussed. We present comprehensive challenges and future recommendations based on the gaps identified in the discussion. Future directions include data fusion, hybrid classification models, availability of public datasets, uncertainty, explainability, and hardware implementation strategies
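
    For illustration, the review's Boolean search expression can be applied as a simple keyword filter over article metadata; the records below are made-up examples, not the articles actually screened in the study.

    # Illustrative only: keyword screening with the stated Boolean expression.
    articles = [
        {"title": "EEG-based brain-computer interface for fatigue classification",
         "abstract": "An electroencephalogram BCI approach to driver monitoring."},
        {"title": "Deep learning for image segmentation",
         "abstract": "A CNN approach for medical images."},
    ]

    def matches(text):
        text = text.lower()
        # (((ELECTROENCEPHALOGRAM) AND (BCI)) AND (FATIGUE CLASSIFICATION)) AND (BRAIN-COMPUTER INTERFACE)
        return ("electroencephalogram" in text and "bci" in text
                and "fatigue classification" in text and "brain-computer interface" in text)

    selected = [a for a in articles if matches(a["title"] + " " + a["abstract"])]
    print(len(selected), "article(s) retained for screening")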